OPM: a parallel method to solve banded systems of equations
Abstract
studied in this report. We can see that, in general, low accuracies require less overlapping and, in those cases, the algorithm runs faster. As the diagonal dominance increases, the amount of overlapping needed decreases and the timings for different accuracies within the same diagonal dominance grow closer together. In this report we presented the Overlapped Partitions Method (OPM), a method for the solution of banded systems with strictly diagonally dominant coefficient matrices. The method is based on the solution of the independent systems obtained after partitioning the coefficient matrix. Some extra equations are added to each independent system in order to control the amount of error generated by OPM. A formula is given to decide on the amount of overlapping depending on the maximum error allowed and the diagonal dominance of the system. In order to compare the method with existing methods, we studied the case of tridiagonal systems, comparing OPM to the Cyclic Reduction (CR) and Tricyclic Reduction (TR) algorithms. The tests were made on one processor of the vector multiprocessor Convex C-3480. OPM proves to be faster than CR for the cases tested in the report. For example, OPM is approximately 20% faster than CR for ρ=0.9 (the weakest diagonal dominance tested) and N=50,000, while for the same diagonal dominance and N=100,000 it is 45% faster. OPM also proves to be faster than TR in some of the cases tested here: for matrices with diagonal dominance ρ<0.8, OPM is faster than TR. Only for matrices smaller than 70,000 with ρ=0.8, and for all cases with ρ=0.9, is TR faster than OPM. TR, though, is an algorithm that will not be comparatively as fast on scalar parallel computers, because it benefits directly from the features of vector processors, as noted above.
As a consequence, the implementation of OPM on scalar parallel computers will be compared to CR and other parallel algorithms suitable for this type of computer. There, OPM will have the advantage of not needing communication among processors. Also, if Gaussian elimination is used for the solution of the independent overlapped partitions, OPM will have a lower operation count for most diagonal dominances. Consequently, OPM will run faster than CR for most cases on scalar parallel …
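The core idea described above — split the system into partitions, extend each partition by some overlapping rows so that the truncation error decays before reaching the rows that are kept, and solve each extended partition independently — can be illustrated for the tridiagonal case. The following is a minimal sketch, not the authors' implementation: the function names, the fixed `parts`/`overlap` parameters, and the use of the Thomas algorithm as the per-partition solver are assumptions for illustration (the report derives the overlap from the diagonal dominance ρ and the error tolerance, which is not reproduced here).

```python
import numpy as np

def solve_tridiag(a, b, c, d):
    """Thomas algorithm for a tridiagonal system.
    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side."""
    n = len(d)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def opm_tridiag(a, b, c, d, parts=4, overlap=20):
    """Overlapped Partitions Method sketch: split the system into
    `parts` blocks, extend each block by `overlap` rows on either
    side, solve the extended blocks independently (these solves are
    what would run in parallel, with no inter-processor
    communication), and keep only the interior of each solution.
    For a strictly diagonally dominant matrix the error introduced
    by truncating the couplings decays with the overlap size."""
    n = len(d)
    x = np.empty(n)
    bounds = np.linspace(0, n, parts + 1, dtype=int)
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        lo_e = max(0, lo - overlap)            # extended partition
        hi_e = min(n, hi + overlap)
        xe = solve_tridiag(a[lo_e:hi_e], b[lo_e:hi_e],
                           c[lo_e:hi_e], d[lo_e:hi_e])
        x[lo:hi] = xe[lo - lo_e: lo - lo_e + (hi - lo)]
    return x
```

Note how the independence of the partitions is what removes the communication: each extended block is a self-contained tridiagonal solve, and the overlapping rows are simply discarded afterwards. The stronger the diagonal dominance, the fewer overlapping rows are needed for a given accuracy, matching the trend reported above.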
Similar resources
Optimizing OPM for tridiagonal systems on vector uniprocessors
In the paper, we compare the optimum OPM to the Cyclic Reduction and Tricyclic Reduction algorithms. In order to compare OPM with CR and TR, we also build a model for each of these two methods. The models built for the three algorithms prove to be accurate (mean quadratic error below 5% for the cases tested). For this reason, the comparison of the three methods is made with their models...
Direct Parallel Algorithms for Banded Linear Systems
We investigate direct algorithms to solve linear banded systems of equations on MIMD multiprocessor computers with distributed memory. We show that it is hard to beat ordinary one-processor Gaussian elimination. Numerical computation results from the Intel Paragon are given.
Gander A Survey of Direct Parallel Algorithms for Banded Linear Systems
We investigate direct algorithms to solve linear banded systems of equations on MIMD multiprocessor computers with distributed memory. We compare the coarse-grain parallel algorithms with ordinary one-processor Gaussian elimination. The parallel algorithms behave satisfactorily only if the ratio of bandwidth to matrix order is very small. As a result of the high redundancy of the parallel algori...
A new multi-step ABS model to solve full row rank linear systems
ABS methods are direct iterative methods for solving linear systems of equations, where the i-th iterate satisfies the first i equations. Thus, a system of m equations is solved in at most m ABS iterations. In 2004 and 2007, two-step ABS methods were introduced that solve full row rank linear systems in at most ⌈(m+1)/2⌉ steps. These methods, consuming less space, are more compress ...